This is a course about statistics with open source tools: R, RStudio, and so on.
This course is massive!
Every week there will be a submission with a deadline on the following Sunday at 23:55. Late submissions will not be accepted.
After that, three of the other students' submissions will be assigned to each participant for peer review; the peer reviews are due the following Wednesday at 23:55.
The exercises and the peer reviews are all that is required for this course.
Here is the link to my GitHub repository.
Load the wrangled data and take a look at it.
learning2014 <- read.table("data/learning2014.tsv", sep = "\t", header = TRUE)
str(learning2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
Plot the data.
library(ggplot2)
# initialize plot with data and aesthetic mapping
p1 <- ggplot(learning2014, aes(x = attitude, y = points, col = gender))
# define the visualization type (points)
p2 <- p1 + geom_point()
# add a regression line
p3 <- p2 + geom_smooth(method = "lm")
# add a main title and draw the plot
p4 <- p3 + ggtitle("Student's attitude versus exam points")
# draw the plot
p4
There is a positive correlation between attitude and points, and no obvious difference between the genders (the data records only two gender categories).
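A quick numeric check of this (a sketch using the data loaded above):
cor(learning2014$attitude, learning2014$points)
# the same correlation computed within each gender
by(learning2014, learning2014$gender, function(d) cor(d$attitude, d$points))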
Now, plot all possible pairs of variables as scatter plots:
pairs(learning2014[-1])
This is not very informative: the panels are very small and there is no regression line to indicate the direction of each correlation.
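One option, sketched below, is to pass base R's panel.smooth as the panel function, so that each panel gets a lowess curve:
# redraw the scatter plot matrix with a smoother in every panel
pairs(learning2014[-1], panel = panel.smooth)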
Draw a more advanced plot matrix:
library(GGally)
library(ggplot2)
# create a more advanced plot matrix with ggpairs()
p <- ggpairs(learning2014, mapping = aes(col = gender, alpha = .3), lower = list(combo = wrap("facethist", bins = 20)))
# draw the plot
p
Age is strongly right-skewed (roughly Poisson-like), probably because the subjects are students. The rest of the variables look approximately normally distributed, as expected.
Apparently, attitude and points have the highest correlation by far.
In addition, there is a negative correlation between surf and deep.
There is a slight negative correlation between surf and points, surf and age, and surf and stra, as well as a positive correlation between stra and points.
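These impressions can be checked against the plain correlation matrix (a quick sketch; the factor gender is dropped):
# correlation matrix of the numeric variables, rounded for readability
round(cor(learning2014[-1]), 2)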
Now, a regression analysis using the three variables that had the highest individual correlation with points:
# create a regression model with three explanatory variables
my_model2 <- lm(points ~ attitude + stra + surf, data = learning2014)
# print out a summary of the model
summary(my_model2)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.0171 3.6837 2.991 0.00322 **
## attitude 3.3952 0.5741 5.913 1.93e-08 ***
## stra 0.8531 0.5416 1.575 0.11716
## surf -0.5861 0.8014 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
Attitude has a clear, statistically significant relationship with points; stra and surf do not reach significance at the 0.05 level. Here are the diagnostic plots:
par(mfrow = c(2,2))
plot(my_model2, which = c(1, 2, 5))
Nothing looks out of the ordinary in these diagnostic plots: the residuals show no clear pattern against the fitted values, they follow the normal Q-Q line reasonably well, and no observation has excessive leverage. Judging by the p-values, however, only attitude is a convincing explanatory variable for points, and the overall explanatory power (multiple R-squared of about 0.21) is modest.
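If the non-significant terms were dropped, the model could be refitted with attitude alone (a sketch; the analysis above keeps the full model):
# refit using only the clearly significant explanatory variable
my_model1 <- lm(points ~ attitude, data = learning2014)
summary(my_model1)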
The data was joined using the following columns as surrogate identifiers for students: school, sex, age, address, famsize, Pstatus, Medu, Fedu, Mjob, Fjob, reason, nursery, internet.
Two new variables were defined: alc_use (the average of the weekday and weekend alcohol consumption ratings Dalc and Walc) and high_use (TRUE when alc_use is greater than 2).
alc <- read.csv("data/alc.csv")
str(alc)
## 'data.frame': 382 obs. of 35 variables:
## $ school : Factor w/ 2 levels "GP","MS": 1 1 1 1 1 1 1 1 1 1 ...
## $ sex : Factor w/ 2 levels "F","M": 1 1 1 1 1 2 2 1 2 2 ...
## $ age : int 18 17 15 15 16 16 16 17 15 15 ...
## $ address : Factor w/ 2 levels "R","U": 2 2 2 2 2 2 2 2 2 2 ...
## $ famsize : Factor w/ 2 levels "GT3","LE3": 1 1 2 1 1 2 2 1 2 1 ...
## $ Pstatus : Factor w/ 2 levels "A","T": 1 2 2 2 2 2 2 1 1 2 ...
## $ Medu : int 4 1 1 4 3 4 2 4 3 3 ...
## $ Fedu : int 4 1 1 2 3 3 2 4 2 4 ...
## $ Mjob : Factor w/ 5 levels "at_home","health",..: 1 1 1 2 3 4 3 3 4 3 ...
## $ Fjob : Factor w/ 5 levels "at_home","health",..: 5 3 3 4 3 3 3 5 3 3 ...
## $ reason : Factor w/ 4 levels "course","home",..: 1 1 3 2 2 4 2 2 2 2 ...
## $ nursery : Factor w/ 2 levels "no","yes": 2 1 2 2 2 2 2 2 2 2 ...
## $ internet : Factor w/ 2 levels "no","yes": 1 2 2 2 1 2 2 1 2 2 ...
## $ guardian : Factor w/ 3 levels "father","mother",..: 2 1 2 2 1 2 2 2 2 2 ...
## $ traveltime: int 2 1 1 1 1 1 1 2 1 1 ...
## $ studytime : int 2 2 2 3 2 2 2 2 2 2 ...
## $ failures : int 0 0 2 0 0 0 0 0 0 0 ...
## $ schoolsup : Factor w/ 2 levels "no","yes": 2 1 2 1 1 1 1 2 1 1 ...
## $ famsup : Factor w/ 2 levels "no","yes": 1 2 1 2 2 2 1 2 2 2 ...
## $ paid : Factor w/ 2 levels "no","yes": 1 1 2 2 2 2 1 1 2 2 ...
## $ activities: Factor w/ 2 levels "no","yes": 1 1 1 2 1 2 1 1 1 2 ...
## $ higher : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
## $ romantic : Factor w/ 2 levels "no","yes": 1 1 1 2 1 1 1 1 1 1 ...
## $ famrel : int 4 5 4 3 4 5 4 4 4 5 ...
## $ freetime : int 3 3 3 2 3 4 4 1 2 5 ...
## $ goout : int 4 3 2 2 2 2 4 4 2 1 ...
## $ Dalc : int 1 1 2 1 1 1 1 1 1 1 ...
## $ Walc : int 1 1 3 1 2 2 1 1 1 1 ...
## $ health : int 3 3 3 5 5 5 3 1 1 5 ...
## $ absences : int 5 3 8 1 2 8 0 4 0 0 ...
## $ G1 : int 2 7 10 14 8 14 12 8 16 13 ...
## $ G2 : int 8 8 10 14 12 14 12 9 17 14 ...
## $ G3 : int 8 8 11 14 12 14 12 10 18 14 ...
## $ alc_use : num 1 1 2.5 1 1.5 1.5 1 1 1 1 ...
## $ high_use : logi FALSE FALSE TRUE FALSE FALSE FALSE ...
The following four variables were chosen:
goout, sex, studytime, and romantic.
The hypothesis is that going out leads to drinking, and that males drink more because they are heavier on average. Furthermore, drinking and going out leave less time for studying. Being in a relationship should reduce alcohol consumption, since there is no need to get wasted to meet people.
Explore the variables of interest:
library(tidyr)
library(dplyr)
library(ggplot2)
alc %>% group_by(high_use) %>% summarise(count = n(), mean_goout=mean(goout),
mean_studytime=mean(studytime))
## # A tibble: 2 x 4
## high_use count mean_goout mean_studytime
## <lgl> <int> <dbl> <dbl>
## 1 FALSE 268 2.85 2.15
## 2 TRUE 114 3.72 1.77
alc %>% group_by(high_use, sex) %>% summarise(count = n())
## # A tibble: 4 x 3
## # Groups: high_use [?]
## high_use sex count
## <lgl> <fct> <int>
## 1 FALSE F 156
## 2 FALSE M 112
## 3 TRUE F 42
## 4 TRUE M 72
alc %>% group_by(high_use, romantic) %>% summarise(count = n())
## # A tibble: 4 x 3
## # Groups: high_use [?]
## high_use romantic count
## <lgl> <fct> <int>
## 1 FALSE no 180
## 2 FALSE yes 88
## 3 TRUE no 81
## 4 TRUE yes 33
g_goout <- ggplot(alc, aes(x = goout, fill=high_use)) +
geom_bar() + xlab("Going out with friends") +
ggtitle("Going out with friends from 1 (very low) to 5 (very high) by alcohol use")
g_studytime <- ggplot(alc, aes(x = studytime, fill=high_use)) +
geom_bar() + xlab("Weekly study time") +
ggtitle("Weekly study time [1 (<2 hours), 2 (2 to 5 hours), 3 (5 to 10 hours), or 4 (>10 hours)] by alchol use")
g_sex <- ggplot(alc, aes(x = sex, fill=high_use)) +
geom_bar() +
ggtitle("Sex by alcohol use")
g_romantic <- ggplot(alc, aes(x = romantic, fill=high_use)) +
geom_bar() +
ggtitle("With a romantic relationship (yes/no) by alcohol use")
# Arrange the plots into a grid
library("gridExtra")
grid.arrange(g_goout, g_studytime, g_sex, g_romantic, ncol=2, nrow=2)
In summary, all parts of the hypothesis seem to be at least weakly supported by these summaries.
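As an additional numeric check (a sketch), the share of high users can be computed directly within each group:
# proportion of high alcohol use by sex and by relationship status
alc %>% group_by(sex) %>% summarise(high_use_rate = mean(high_use))
alc %>% group_by(romantic) %>% summarise(high_use_rate = mean(high_use))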
Fit a logistic regression model using high_use as the target variable and goout, studytime, sex, and romantic as explanatory variables.
m <- glm(high_use ~ goout + studytime + sex + romantic, data = alc, family = "binomial")
The variables goout, studytime, and sex are associated with alcohol consumption: high consumption is associated with going out more (as expected) and with being male, while more weekly study time is associated with lower consumption.
Summary of the model:
summary(m)
##
## Call:
## glm(formula = high_use ~ goout + studytime + sex + romantic,
## family = "binomial", data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.7365 -0.8114 -0.5009 0.9081 2.6642
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.6988 0.5712 -4.725 2.30e-06 ***
## goout 0.7536 0.1187 6.350 2.15e-10 ***
## studytime -0.4774 0.1683 -2.837 0.00456 **
## sexM 0.6657 0.2585 2.576 0.01000 *
## romanticyes -0.1424 0.2699 -0.528 0.59767
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 393.67 on 377 degrees of freedom
## AIC: 403.67
##
## Number of Fisher Scoring iterations: 4
Coefficients of the model as odds ratios and their confidence intervals:
or <- coef(m) %>% exp
ci <- confint(m) %>% exp
cbind(or, ci)
## or 2.5 % 97.5 %
## (Intercept) 0.06728696 0.02129867 0.2010636
## goout 2.12456419 1.69404697 2.7003422
## studytime 0.62040325 0.44145946 0.8558631
## sexM 1.94589655 1.17538595 3.2443631
## romanticyes 0.86724548 0.50714091 1.4648961
In summary, a one-unit increase in goout is associated with roughly a 2.1-fold increase in the odds of high alcohol consumption. A one-unit increase in studytime multiplies the odds by about 0.6, i.e. lowers them. Being male is associated with about 1.9 times the odds of high alcohol consumption compared to being female.
Being in a romantic relationship is not significantly associated with a change in the odds of high alcohol consumption, so that part of the hypothesis was not supported.
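To make the odds ratios concrete, the fitted model can score a hypothetical student (the values below are made up purely for illustration):
# predicted probability of high alcohol use for a made-up student who
# goes out a lot (5), studies little (1), is male and not in a relationship
new_student <- data.frame(goout = 5, studytime = 1, sex = "M", romantic = "no")
predict(m, newdata = new_student, type = "response")
With the coefficients above, this works out to a predicted probability of roughly 0.78.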
Fit a logistic model with only the explanatory variables that were statistically significantly associated with high alcohol consumption:
m <- glm(high_use ~ goout + studytime + sex, data = alc, family = "binomial")
Prediction performance of the model:
probability <- predict(m, type="response")
alc <- mutate(alc, probability=probability)
alc <- mutate(alc, prediction=probability > 0.5)
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 250 18
## TRUE 76 38
The model is better at predicting low alcohol consumption than high alcohol consumption.
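The same table can also be expressed as proportions with margins (a quick sketch):
# confusion matrix as proportions of all observations, with margins added
table(high_use = alc$high_use, prediction = alc$prediction) %>%
  prop.table() %>% addmargins()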
Visualize the actual classes, the predicted probabilities, and the predicted classes:
g <- ggplot(alc, aes(x = probability, y = high_use, col=prediction))
g + geom_point()
Calculate the total proportion of misclassified individuals using the regression model, and compare it with a simple guessing strategy in which everyone is classified into the most prevalent class:
loss_func <- function(class, prob) {
  # proportion of observations where the predicted probability
  # falls on the wrong side of 0.5
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2460733
loss_func(class = alc$high_use, prob = 0)
## [1] 0.2984293
Using the regression model, 24.6% of the individuals are misclassified, compared to 29.8% when guessing that everybody belongs to the low-alcohol-use class. The model thus provides a modest improvement over always guessing the most prevalent class.
Perform 10-fold cross-validation of the model to estimate its performance on unseen data, measured as the proportion of misclassified individuals. The mean prediction error in the test set:
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv$delta[1]
## [1] 0.2460733
The mean prediction error in the test set is 0.25, marginally better than the performance of the model in the DataCamp exercises, with a mean prediction error of 0.26 in the test set.
Construct models with different numbers of predictors and calculate the test set and training set prediction errors:
predictors <- c('school', 'sex', 'age', 'address', 'famsize', 'Pstatus', 'Medu', 'Fedu', 'Mjob', 'Fjob', 'reason', 'nursery', 'internet', 'guardian', 'traveltime', 'studytime', 'failures', 'schoolsup', 'famsup', 'paid', 'activities', 'higher', 'romantic', 'famrel', 'freetime', 'goout', 'health', 'absences', 'G1', 'G2', 'G3')
# Fit several models and record the test and training errors
# 1) Use all of the predictors.
# 2) Drop one predictor and fit a new model.
# 3) Continue until only one predictor is left in the model.
test_error <- numeric(length(predictors))
training_error <- numeric(length(predictors))
for (i in length(predictors):1) {
  model_formula <- as.formula(paste0("high_use ~ ", paste(predictors[1:i], collapse = " + ")))
  glmfit <- glm(model_formula, data = alc, family = "binomial")
  # cross-validate the model that was just fitted (not the earlier model m)
  cv <- cv.glm(data = alc, cost = loss_func, glmfit = glmfit, K = 10)
  test_error[i] <- cv$delta[1]
  training_error[i] <- loss_func(alc$high_use, predict(glmfit, type = "response"))
}
data_error <- rbind(data.frame(n_predictors=1:length(predictors),
prediction_error=test_error,
type = "test error"),
data.frame(n_predictors=1:length(predictors),
prediction_error=training_error,
type = "training error"))
g <- ggplot(data_error, aes(x = n_predictors, y = prediction_error, col=type))
g + geom_point()
Load Boston dataset from the MASS package:
library(corrplot)
library(dplyr)
library(MASS)
data("Boston")
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
The dataset has 14 variables and 506 observations. Full details can be found in the dataset’s documentation.
Summary of the variables in the dataset:
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
Plot the variables and explore the data:
library(GGally)
library(ggplot2)
p <- ggpairs(Boston, mapping = aes(alpha=0.3),
lower = list(combo = wrap("facethist", bins = 20)))
p
Correlation of the variables:
cor(Boston) %>% corrplot(method = "circle", type = "upper", cl.pos = "b", tl.pos = "d")
Scaling the dataset so that the average is \(0\) and standard deviation is \(1\):
\[x_{scaled}=\frac{x - \mu_{x}}{\sigma_{x}}\] where \(\mu_{x}\) is the mean of \(x\) and \(\sigma_{x}\) is the standard deviation of \(x\).
boston_scaled <- scale(Boston) %>% as.data.frame()
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
Create a factor variable crime from crim by cutting it at its quartiles into the categories “low”, “med_low”, “med_high”, and “high”:
bins <- quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE,
label=c("low", "med_low", "med_high", "high"))
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)
Divide the dataset into training and test sets so that 80% of the observations belong to the training set and 20% to the test set:
set.seed(1)
train.idx <- sample(nrow(boston_scaled), size = 0.8 * nrow(boston_scaled))
train <- boston_scaled[train.idx,]
test <- boston_scaled[-train.idx,]
Fit the linear discriminant analysis (LDA) on the training set using the categorical crime rate as the target variable and all the other variables in the dataset as predictor variables:
lda.fit <- lda(crime ~ ., data = train)
The LDA bi-plot:
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)) {
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0,
         x1 = myscale * heads[, choices[1]],
         y1 = myscale * heads[, choices[2]],
         col = color, length = arrow_heads)
  text(myscale * heads[, choices],
       labels = row.names(heads),
       cex = tex, col = color, pos = 3)
}
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 2)
Use the fitted LDA model to predict the categorical crime rate in the test set. Cross-tabulate the observed classes and the predicted classes:
correct_classes <- test$crime
test <- dplyr::select(test, -crime)
lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 15 11 1 0
## med_low 8 20 2 0
## med_high 1 9 16 0
## high 0 0 0 19
The model predicts the “high” class perfectly and the other classes reasonably well. Prediction accuracy is worst for the “low” class: a large proportion of the “low” observations are misclassified as “med_low”.
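The overall test-set accuracy can be computed directly from the predictions (a quick sketch):
# proportion of correctly classified observations in the test set
mean(lda.pred$class == correct_classes)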
Reload the Boston dataset and standardize it as above. Calculate the Euclidean distance between the observations:
data("Boston")
boston_scaled <- scale(Boston) %>% as.data.frame()
dist_eu <- dist(boston_scaled)
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
Run the k-means algorithm with 3 clusters and visualize the results:
# seeded above
km <- kmeans(boston_scaled, centers = 3)
pairs(boston_scaled, col = km$cluster)
Calculate the total within-cluster sum of squares (TWCSS) as the number of clusters changes from 1 to 10:
k_max <- 10
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')
The optimal number of clusters is where the TWCSS drops most sharply. Based on the graph, 2 seems to be the optimal number.
Perform k-means with 2 clusters and visualize the results:
km <- kmeans(boston_scaled, centers = 2)
pairs(boston_scaled, col = km$cluster)
Perform k-means clustering with 3 clusters on the scaled Boston dataset. Use the cluster assignments as the target variable for LDA analysis:
km <- kmeans(boston_scaled, centers = 3)
boston_scaled$kmeans_cluster <- km$cluster
lda.fit <- lda(kmeans_cluster ~ ., data = boston_scaled)
The LDA bi-plot:
plot(lda.fit, dimen = 2, col = boston_scaled$kmeans_cluster, pch = boston_scaled$kmeans_cluster)
lda.arrows(lda.fit, myscale = 2)
Based on the biplot, the most influential linear separators are age, dis, rad, and tax.
# Load libraries
library(corrplot)
library(dplyr)
library(tidyr)
Load the dataset:
human <- read.csv('data/human.csv', row.names = 1)
head(human)
## Edu2.FM Labo.FM Edu.Exp Life.Exp GNI Mat.Mor Ado.Birth
## Norway 1.0072389 0.8908297 17.5 81.6 64992 4 7.8
## Australia 0.9968288 0.8189415 20.2 82.4 42261 6 12.1
## Switzerland 0.9834369 0.8251001 15.8 83.0 56431 6 1.9
## Denmark 0.9886128 0.8840361 18.7 80.2 44025 5 5.1
## Netherlands 0.9690608 0.8286119 17.9 81.6 45435 6 6.2
## Germany 0.9927835 0.8072289 16.5 80.9 43919 7 3.8
## Parli.F
## Norway 39.6
## Australia 30.5
## Switzerland 28.5
## Denmark 38.0
## Netherlands 36.9
## Germany 36.9
str(human)
## 'data.frame': 155 obs. of 8 variables:
## $ Edu2.FM : num 1.007 0.997 0.983 0.989 0.969 ...
## $ Labo.FM : num 0.891 0.819 0.825 0.884 0.829 ...
## $ Edu.Exp : num 17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
## $ Life.Exp : num 81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
## $ GNI : int 64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
## $ Mat.Mor : int 4 6 6 5 6 7 9 28 11 8 ...
## $ Ado.Birth: num 7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
## $ Parli.F : num 39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
The dataset contains 155 observations of 8 variables and combines several indicators from most countries in the world. The country names are the row names of the data frame and the variables are:
Health and knowledge:

* GNI: Gross National Income per capita
* Life.Exp: Life expectancy at birth
* Edu.Exp: Expected years of schooling
* Mat.Mor: Maternal mortality ratio
* Ado.Birth: Adolescent birth rate

Empowerment:

* Parli.F: Percentage of female representatives in parliament
* Edu2.FM: Ratio of females to males with at least secondary education
* Labo.FM: Ratio of females to males in the labour force
Visualize the distribution of the variables and their dependencies:
library(GGally)
library(ggplot2)
p <- ggpairs(human, mapping = aes(alpha=0.3),
lower = list(combo = wrap("facethist", bins = 20)))
p
Pairs-plot
Most of the variables have very skewed distributions, for example Life.Exp and GNI. Only Edu.Exp seems to be approximately normally distributed.
Visualize the correlation between the variables:
cor(human) %>% corrplot()
Correlation
There is a group of highly correlated variables: Edu.Exp, Life.Exp, GNI, Mat.Mor, and Ado.Birth. Edu.Exp and Life.Exp are positively correlated, while Edu.Exp and Mat.Mor are negatively correlated.
Summaries of the variables:
summary(human)
## Edu2.FM Labo.FM Edu.Exp Life.Exp
## Min. :0.1717 Min. :0.1857 Min. : 5.40 Min. :49.00
## 1st Qu.:0.7264 1st Qu.:0.5984 1st Qu.:11.25 1st Qu.:66.30
## Median :0.9375 Median :0.7535 Median :13.50 Median :74.20
## Mean :0.8529 Mean :0.7074 Mean :13.18 Mean :71.65
## 3rd Qu.:0.9968 3rd Qu.:0.8535 3rd Qu.:15.20 3rd Qu.:77.25
## Max. :1.4967 Max. :1.0380 Max. :20.20 Max. :83.50
## GNI Mat.Mor Ado.Birth Parli.F
## Min. : 581 Min. : 1.0 Min. : 0.60 Min. : 0.00
## 1st Qu.: 4198 1st Qu.: 11.5 1st Qu.: 12.65 1st Qu.:12.40
## Median : 12040 Median : 49.0 Median : 33.60 Median :19.30
## Mean : 17628 Mean : 149.1 Mean : 47.16 Mean :20.91
## 3rd Qu.: 24512 3rd Qu.: 190.0 3rd Qu.: 71.95 3rd Qu.:27.95
## Max. :123124 Max. :1100.0 Max. :204.80 Max. :57.50
Perform principal component analysis (PCA) using the singular value decomposition (SVD) method on the un-standardized dataset. The variability captured by the principal components:
pca_human <- prcomp(human)
s <- summary(pca_human)
s
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 1.854e+04 185.5219 25.19 11.45 3.766 1.566 0.1912
## Proportion of Variance 9.999e-01 0.0001 0.00 0.00 0.000 0.000 0.0000
## Cumulative Proportion 9.999e-01 1.0000 1.00 1.00 1.000 1.000 1.0000
## PC8
## Standard deviation 0.1591
## Proportion of Variance 0.0000
## Cumulative Proportion 1.0000
pca_pr <- round(100*s$importance[2,], digits = 1)
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")
biplot(pca_human, choices = 1:2, cex=c(0.7,1), col=c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])
Countries plotted against the first two principal components for the un-standardized dataset
Gross national income per capita (GNI) seems to explain almost all of the variation in the un-standardized dataset. Its values are on a much larger scale than those of the other variables, so it dominates the PCA.
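This can be verified by comparing the standard deviations of the raw variables (a quick sketch); the scale of GNI dwarfs all the others:
# standard deviations of the un-standardized variables
sapply(human, sd) %>% round(1)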
Next, standardize the dataset so that every variable has a mean of 0 and a standard deviation of 1 and perform PCA on the standardized dataset. The summary for the standardized variables:
human_std <- scale(human)
summary(human_std)
## Edu2.FM Labo.FM Edu.Exp Life.Exp
## Min. :-2.8189 Min. :-2.6247 Min. :-2.7378 Min. :-2.7188
## 1st Qu.:-0.5233 1st Qu.:-0.5484 1st Qu.:-0.6782 1st Qu.:-0.6425
## Median : 0.3503 Median : 0.2316 Median : 0.1140 Median : 0.3056
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5958 3rd Qu.: 0.7350 3rd Qu.: 0.7126 3rd Qu.: 0.6717
## Max. : 2.6646 Max. : 1.6632 Max. : 2.4730 Max. : 1.4218
## GNI Mat.Mor Ado.Birth Parli.F
## Min. :-0.9193 Min. :-0.6992 Min. :-1.1325 Min. :-1.8203
## 1st Qu.:-0.7243 1st Qu.:-0.6496 1st Qu.:-0.8394 1st Qu.:-0.7409
## Median :-0.3013 Median :-0.4726 Median :-0.3298 Median :-0.1403
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.3712 3rd Qu.: 0.1932 3rd Qu.: 0.6030 3rd Qu.: 0.6127
## Max. : 5.6890 Max. : 4.4899 Max. : 3.8344 Max. : 3.1850
The variability captured by the principal components:
pca_human_std <- prcomp(human_std)
s <- summary(pca_human_std)
s
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6
## Standard deviation 2.0708 1.1397 0.87505 0.77886 0.66196 0.53631
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595
## Cumulative Proportion 0.5361 0.6984 0.79413 0.86996 0.92473 0.96069
## PC7 PC8
## Standard deviation 0.45900 0.32224
## Proportion of Variance 0.02634 0.01298
## Cumulative Proportion 0.98702 1.00000
pca_pr <- round(100*s$importance[2,], digits = 1)
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")
biplot(pca_human_std, choices = 1:2, cex=c(0.7,1), col=c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])
Countries plotted against the first two principal components for the standardized dataset.
The expected years of schooling, life expectancy at birth, gross national income per capita, and the ratio of women to men with at least secondary education seem to correlate with each other as well as with PC1. Maternal mortality and the adolescent birth rate are inversely correlated with the former variables and also correlated with PC1. The ratio of women to men in the labour force correlates with the percentage of female representatives in parliament, and both correlate with PC2.
The first principal component thus seems to separate the countries by variables related to health, education, and wealth, while the second captures the variability in women's participation in working life and politics.
Load the tea dataset from the FactoMineR package. The dataset represents a questionnaire on tea administered to 300 individuals: how they drink tea (18 questions), their perceptions of the product (12 questions), and some personal details. The structure of the dataset:
library("FactoMineR")
data(tea)
str(tea)
## 'data.frame': 300 obs. of 36 variables:
## $ breakfast : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
## $ tea.time : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
## $ evening : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
## $ lunch : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## $ dinner : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
## $ always : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
## $ home : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
## $ work : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
## $ tearoom : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
## $ friends : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
## $ resto : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
## $ pub : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ sugar : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ where : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ price : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
## $ age : int 39 45 47 23 48 21 37 36 40 37 ...
## $ sex : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
## $ SPC : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
## $ Sport : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
## $ age_Q : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
## $ frequency : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
## $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
## $ spirituality : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
## $ healthy : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
## $ diuretic : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
## $ friendliness : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
## $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ feminine : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
## $ sophisticated : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
## $ slimming : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ exciting : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
## $ relaxing : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
## $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
The dataset contains 300 observations of 36 variables. Visualize the dataset:
gather(tea) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 10))
Variables and the distribution of their values
Perform multiple correspondence analysis (MCA) on the dataset using a subset of variables: Tea, How, how, sugar, where, lunch:
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- dplyr::select(tea, one_of(keep_columns))
mca <- MCA(tea_time, graph = FALSE)
summary(mca)
##
## Call:
## MCA(X = tea_time, graph = FALSE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6
## Variance 0.279 0.261 0.219 0.189 0.177 0.156
## % of var. 15.238 14.232 11.964 10.333 9.667 8.519
## Cumulative % of var. 15.238 29.471 41.435 51.768 61.434 69.953
## Dim.7 Dim.8 Dim.9 Dim.10 Dim.11
## Variance 0.144 0.141 0.117 0.087 0.062
## % of var. 7.841 7.705 6.392 4.724 3.385
## Cumulative % of var. 77.794 85.500 91.891 96.615 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.298 0.106 0.086 | -0.328 0.137 0.105 | -0.327
## 2 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 3 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 4 | -0.530 0.335 0.460 | -0.318 0.129 0.166 | 0.211
## 5 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 6 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 7 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 8 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 9 | 0.143 0.024 0.012 | 0.871 0.969 0.435 | -0.067
## 10 | 0.476 0.271 0.140 | 0.687 0.604 0.291 | -0.650
## ctr cos2
## 1 0.163 0.104 |
## 2 0.735 0.314 |
## 3 0.062 0.069 |
## 4 0.068 0.073 |
## 5 0.062 0.069 |
## 6 0.062 0.069 |
## 7 0.062 0.069 |
## 8 0.735 0.314 |
## 9 0.007 0.003 |
## 10 0.643 0.261 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr
## black | 0.473 3.288 0.073 4.677 | 0.094 0.139
## Earl Grey | -0.264 2.680 0.126 -6.137 | 0.123 0.626
## green | 0.486 1.547 0.029 2.952 | -0.933 6.111
## alone | -0.018 0.012 0.001 -0.418 | -0.262 2.841
## lemon | 0.669 2.938 0.055 4.068 | 0.531 1.979
## milk | -0.337 1.420 0.030 -3.002 | 0.272 0.990
## other | 0.288 0.148 0.003 0.876 | 1.820 6.347
## tea bag | -0.608 12.499 0.483 -12.023 | -0.351 4.459
## tea bag+unpackaged | 0.350 2.289 0.056 4.088 | 1.024 20.968
## unpackaged | 1.958 27.432 0.523 12.499 | -1.015 7.898
## cos2 v.test Dim.3 ctr cos2 v.test
## black 0.003 0.929 | -1.081 21.888 0.382 -10.692 |
## Earl Grey 0.027 2.867 | 0.433 9.160 0.338 10.053 |
## green 0.107 -5.669 | -0.108 0.098 0.001 -0.659 |
## alone 0.127 -6.164 | -0.113 0.627 0.024 -2.655 |
## lemon 0.035 3.226 | 1.329 14.771 0.218 8.081 |
## milk 0.020 2.422 | 0.013 0.003 0.000 0.116 |
## other 0.102 5.534 | -2.524 14.526 0.197 -7.676 |
## tea bag 0.161 -6.941 | -0.065 0.183 0.006 -1.287 |
## tea bag+unpackaged 0.478 11.956 | 0.019 0.009 0.000 0.226 |
## unpackaged 0.141 -6.482 | 0.257 0.602 0.009 1.640 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.126 0.108 0.410 |
## How | 0.076 0.190 0.394 |
## how | 0.708 0.522 0.010 |
## sugar | 0.065 0.001 0.336 |
## where | 0.702 0.681 0.055 |
## lunch | 0.000 0.064 0.111 |
The first dimension explains 15% of the variation in the data and the second dimension 14%. The variables how and where have the strongest link to the first two dimensions.
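The squared correlation ratios behind this statement can also be extracted from the MCA object (a sketch, assuming the usual FactoMineR result structure):
# eta2 of each variable on the first two dimensions
round(mca$var$eta2[, 1:2], 3)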
Visualize the MCA results:
plot(mca, invisible=c("ind"), habillage= "quali")
Variable biplot of the MCA analysis results on the ‘tea’ dataset with variables Tea, How, how, sugar, where, lunch.
Based on the plot, individuals who use unpackaged tea tend to buy it from tea shops and prefer green tea. On the other hand, individuals who use tea bags often buy their tea from chain stores.
library(dplyr)
library(tidyr)
library(ggplot2)
In the BPRS dataset 40 male subjects were randomly assigned to one of two treatment groups. Each subject was rated on the brief psychiatric rating scale (BPRS) measured before treatment began (week 0) and then at weekly intervals for eight weeks. The BPRS assesses the level of 18 symptom constructs such as hostility, suspiciousness, hallucinations and grandiosity; each of these is rated from 1 (not present) to 7 (extremely severe). The scale is used to evaluate patients suspected of having schizophrenia.
Read the dataset:
BPRSL <- read.csv("data/BPRSL.csv")
BPRSL$treatment <- factor(BPRSL$treatment)
BPRSL$subject <- factor(BPRSL$subject)
str(BPRSL)
## 'data.frame': 360 obs. of 6 variables:
## $ X : int 1 2 3 4 5 6 7 8 9 10 ...
## $ treatment: Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1 ...
## $ subject : Factor w/ 20 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
## $ weeks : Factor w/ 9 levels "week0","week1",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ bprs : int 42 58 54 55 72 48 71 30 41 57 ...
## $ week : int 0 0 0 0 0 0 0 0 0 0 ...
Plot the bprs values over time for each individual by treatment group:
ggplot(BPRSL, aes(x = week, y = bprs, linetype = subject)) +
geom_line() +
scale_linetype_manual(values = rep(1:10, times=4)) +
facet_grid(. ~ treatment, labeller = label_both) +
theme(legend.position = "none") +
scale_y_continuous(limits = c(min(BPRSL$bprs), max(BPRSL$bprs)))
The BPRS score and the variability of the score in both treatment groups decrease over time.
Standardize the scores separately for each time point: within every week, subtract the mean bprs from each value and divide by the standard deviation:
BPRSL <- BPRSL %>%
group_by(week) %>%
mutate(stdbprs = scale(bprs)) %>%
ungroup()
glimpse(BPRSL)
## Observations: 360
## Variables: 7
## $ X <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 1...
## $ treatment <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ subject <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 1...
## $ weeks <fct> week0, week0, week0, week0, week0, week0, week0, wee...
## $ bprs <int> 42, 58, 54, 55, 72, 48, 71, 30, 41, 57, 30, 55, 36, ...
## $ week <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ stdbprs <dbl> -0.4245908, 0.7076513, 0.4245908, 0.4953559, 1.69836...
Plot the standardized values:
ggplot(BPRSL, aes(x = week, y = stdbprs, linetype = subject)) +
geom_line() +
scale_linetype_manual(values = rep(1:10, times=4)) +
facet_grid(. ~ treatment, labeller = label_both) +
scale_y_continuous(name = "standardized bprs")
Plot the average bprs for each time point for the two different treatment groups, and add the standard error of the means to the plots:
\[se = \frac{sd(x)}{\sqrt{n}}\]
n <- BPRSL$week %>% unique() %>% length()
BPRSS <- BPRSL %>%
group_by(treatment, week) %>%
summarise(mean = mean(bprs), se = sd(bprs)/sqrt(n) ) %>%
ungroup()
glimpse(BPRSS)
## Observations: 18
## Variables: 4
## $ treatment <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2
## $ week <int> 0, 1, 2, 3, 4, 5, 6, 7, 8, 0, 1, 2, 3, 4, 5, 6, 7, 8
## $ mean <dbl> 47.00, 46.80, 43.55, 40.90, 36.60, 32.70, 29.70, 29....
## $ se <dbl> 4.534468, 5.173708, 4.003617, 3.744626, 3.259534, 2....
ggplot(BPRSS, aes(x = week, y = mean, linetype = treatment, shape = treatment)) +
geom_line() +
scale_linetype_manual(values = c(1,2)) +
geom_point(size=3) +
scale_shape_manual(values = c(1,2)) +
geom_errorbar(aes(ymin=mean-se, ymax=mean+se, linetype="1"), width=0.3) +
theme(legend.position = c(0.8,0.8)) +
scale_y_continuous(name = "average(bprs) +/- sem(bprs)")
The averaged profiles overlap considerably once the standard errors of the mean are taken into account, which suggests at most a small difference between the treatment groups.
Compare the average bprs values between the treatment groups on weeks 1 to 8 by plotting the distribution of the averaged bprs values for the two groups:
BPRSL8S <- BPRSL %>%
filter(week > 0) %>%
group_by(treatment, subject) %>%
summarise(mean=mean(bprs)) %>%
ungroup()
glimpse(BPRSL8S)
## Observations: 40
## Variables: 3
## $ treatment <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
## $ subject <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 1...
## $ mean <dbl> 41.500, 43.125, 35.375, 52.625, 50.375, 34.000, 37.1...
ggplot(BPRSL8S, aes(x = treatment, y = mean)) +
geom_boxplot() +
stat_summary(fun.y = "mean", geom = "point", shape=23, size=4, fill = "white") +
scale_y_continuous(name = "mean(bprs), weeks 1-8")
There is an outlier in group 2 with a mean bprs value of over 70. Remove it so that it does not bias the results:
BPRSL8S1 <- BPRSL %>%
filter(week > 0) %>%
group_by(treatment, subject) %>%
summarise(mean=mean(bprs)) %>%
ungroup() %>%
filter(mean < 70)
ggplot(BPRSL8S1, aes(x = treatment, y = mean)) +
geom_boxplot() +
stat_summary(fun.y = "mean", geom = "point", shape=23, size=4, fill = "white") +
scale_y_continuous(name = "mean(bprs), weeks 1-8")
Looking at the plot, one might conclude that the average bprs is lower in treatment group 2, but the within-group variation is large relative to the difference between the groups.
Perform a t-test comparing the average bprs values between the treatment groups:
t.test(mean ~ treatment, data = BPRSL8S1, var.equal = TRUE)
##
## Two Sample t-test
##
## data: mean by treatment
## t = 0.52095, df = 37, p-value = 0.6055
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -4.232480 7.162085
## sample estimates:
## mean in group 1 mean in group 2
## 36.16875 34.70395
There is no statistically significant difference between the groups.
The baseline bprs value might be correlated with the chosen summary measure. Add it to the model to see whether it affects the difference between the treatment groups:
baseline <- BPRSL %>%
filter(week == 0) %>%
rename(baseline=bprs) %>%
dplyr::select(one_of(c("treatment", "subject", "baseline")))
BPRSL8S2 <- BPRSL8S %>%
left_join(baseline)
## Joining, by = c("treatment", "subject")
fit <- lm(mean ~ treatment + baseline, data = BPRSL8S2)
anova(fit)
## Analysis of Variance Table
##
## Response: mean
## Df Sum Sq Mean Sq F value Pr(>F)
## treatment 1 1.55 1.55 0.025 0.8752
## baseline 1 1869.97 1869.97 30.174 3.051e-06 ***
## Residuals 37 2292.97 61.97
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The baseline bprs value is strongly associated with the bprs values taken after treatment began; still, there is no evidence of a treatment difference even after conditioning on the baseline value.
The RATS dataset comes from a nutrition study conducted on three groups of… you guessed it, rats. The groups were on different diets, and the body weight of each animal was recorded weekly over a period of 9 weeks (except in week 7, when it was recorded twice).
Read in the dataset:
RATSL <- read.csv("data/RATSL.csv")
RATSL$ID <- factor(RATSL$ID)
RATSL$Group <- factor(RATSL$Group)
str(RATSL)
## 'data.frame': 176 obs. of 6 variables:
## $ X : int 1 2 3 4 5 6 7 8 9 10 ...
## $ ID : Factor w/ 16 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
## $ Group : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 2 2 ...
## $ WD : Factor w/ 11 levels "WD1","WD15","WD22",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ Weight: int 240 225 245 260 255 260 275 245 410 405 ...
## $ Time : int 1 1 1 1 1 1 1 1 1 1 ...
Plot the RATSL dataset:
ggplot(RATSL, aes(x = Time, y = Weight, group = ID)) +
geom_line(aes(linetype = Group)) +
scale_x_continuous(name = "Time (days)", breaks = seq(0, 60, 10)) +
scale_y_continuous(name = "Weight (grams)") +
theme(legend.position = "top")
The weight of the rats in group 1 is lower at the start of the follow-up compared to the rats in groups 2 and 3, and stays lower during the follow-up.
Fit a linear regression model where Weight is the outcome and Group and Time are the explanatory variables:
We are making the (highly unlikely!) assumption that the consecutive weights of the same animal are independent:
RATS_reg <- lm(Weight ~ Time + Group, data=RATSL)
summary(RATS_reg)
##
## Call:
## lm(formula = Weight ~ Time + Group, data = RATSL)
##
## Residuals:
## Min 1Q Median 3Q Max
## -60.643 -24.017 0.697 10.837 125.459
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 244.0689 5.7725 42.281 < 2e-16 ***
## Time 0.5857 0.1331 4.402 1.88e-05 ***
## Group2 220.9886 6.3402 34.855 < 2e-16 ***
## Group3 262.0795 6.3402 41.336 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 34.34 on 172 degrees of freedom
## Multiple R-squared: 0.9283, Adjusted R-squared: 0.9271
## F-statistic: 742.6 on 3 and 172 DF, p-value: < 2.2e-16
Weight is statistically significantly higher in groups 2 and 3 than in group 1. The regression coefficient of Time is positive and statistically significant: on average, the weight of the animals increases during the follow-up.
Fit a random intercept model using the same two explanatory variables, Time and Group. To allow the rats to have a different weight at the start of the follow-up we use the identity of each rat as the random effect:
library(lme4)
RATS_ref <- lmer(Weight ~ Time + Group + (1 | ID), data = RATSL, REML = FALSE)
summary(RATS_ref)
## Linear mixed model fit by maximum likelihood ['lmerMod']
## Formula: Weight ~ Time + Group + (1 | ID)
## Data: RATSL
##
## AIC BIC logLik deviance df.resid
## 1333.2 1352.2 -660.6 1321.2 170
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.5386 -0.5581 -0.0494 0.5693 3.0990
##
## Random effects:
## Groups Name Variance Std.Dev.
## ID (Intercept) 1085.92 32.953
## Residual 66.44 8.151
## Number of obs: 176, groups: ID, 16
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 244.06890 11.73107 20.80
## Time 0.58568 0.03158 18.54
## Group2 220.98864 20.23577 10.92
## Group3 262.07955 20.23577 12.95
##
## Correlation of Fixed Effects:
## (Intr) Time Group2
## Time -0.090
## Group2 -0.575 0.000
## Group3 -0.575 0.000 0.333
Even after allowing the animals to start the follow-up at different weights, the animals in groups 2 and 3 are heavier than those in group 1, and the weight of the animals increases during the follow-up.
Add a random slope to the model of the rat growth data. A random intercept and random slope model allows the linear fits for each animal to differ in both intercept and slope, so we account for the rats starting at different weights and for their weights changing at different rates over time:
RATS_ref1 <- lmer(Weight ~ Time + Group + (Time | ID), data = RATSL, REML = FALSE)
summary(RATS_ref1)
## Linear mixed model fit by maximum likelihood ['lmerMod']
## Formula: Weight ~ Time + Group + (Time | ID)
## Data: RATSL
##
## AIC BIC logLik deviance df.resid
## 1194.2 1219.6 -589.1 1178.2 168
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.2261 -0.4322 0.0555 0.5638 2.8827
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## ID (Intercept) 1140.5363 33.7718
## Time 0.1122 0.3349 -0.22
## Residual 19.7456 4.4436
## Number of obs: 176, groups: ID, 16
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 246.45727 11.81526 20.859
## Time 0.58568 0.08548 6.852
## Group2 214.58736 20.17983 10.634
## Group3 258.92732 20.17983 12.831
##
## Correlation of Fixed Effects:
## (Intr) Time Group2
## Time -0.166
## Group2 -0.569 0.000
## Group3 -0.569 0.000 0.333
The animals in groups 2 and 3 are heavier than those in group 1, and the weight increases on average over time.
Compare the random intercept model and the random intercept and slope model by performing a likelihood ratio test:
anova(RATS_ref1, RATS_ref)
## Data: RATSL
## Models:
## RATS_ref: Weight ~ Time + Group + (1 | ID)
## RATS_ref1: Weight ~ Time + Group + (Time | ID)
## Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq)
## RATS_ref 6 1333.2 1352.2 -660.58 1321.2
## RATS_ref1 8 1194.2 1219.6 -589.11 1178.2 142.94 2 < 2.2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The p-value is highly significant and the log-likelihood of the random intercept and random slope model (-589.1) is higher than that of the random intercept model (-660.6), which indicates that it fits the data better.
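The chi-squared statistic in the table is simply twice the difference in log-likelihoods, which can be checked by hand (a sketch):
# likelihood ratio statistic and its p-value computed manually
lr <- 2 * (logLik(RATS_ref1) - logLik(RATS_ref))
pchisq(as.numeric(lr), df = 2, lower.tail = FALSE)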
To test if the growth profiles of the rats differ between the groups, fit a random intercept and slope model that allows for a Group times Time interaction:
RATS_ref2 <- lmer(Weight ~ Time + Group + Time * Group + (Time | ID), data = RATSL, REML = FALSE)
summary(RATS_ref2)
## Linear mixed model fit by maximum likelihood ['lmerMod']
## Formula: Weight ~ Time + Group + Time * Group + (Time | ID)
## Data: RATSL
##
## AIC BIC logLik deviance df.resid
## 1185.9 1217.6 -582.9 1165.9 166
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -3.2669 -0.4249 0.0726 0.6034 2.7513
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## ID (Intercept) 1.107e+03 33.2763
## Time 4.925e-02 0.2219 -0.15
## Residual 1.975e+01 4.4436
## Number of obs: 176, groups: ID, 16
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 251.65165 11.80279 21.321
## Time 0.35964 0.08215 4.378
## Group2 200.66549 20.44303 9.816
## Group3 252.07168 20.44303 12.330
## Time:Group2 0.60584 0.14229 4.258
## Time:Group3 0.29834 0.14229 2.097
##
## Correlation of Fixed Effects:
## (Intr) Time Group2 Group3 Tm:Gr2
## Time -0.160
## Group2 -0.577 0.092
## Group3 -0.577 0.092 0.333
## Time:Group2 0.092 -0.577 -0.160 -0.053
## Time:Group3 0.092 -0.577 -0.053 -0.160 0.333
The Time × Group interaction terms are positive for groups 2 and 3: the animals in those groups gain weight faster than the animals in group 1.
Compare the random intercept and random slope model to the model that also includes the Time × Group interaction using ANOVA:
anova(RATS_ref2, RATS_ref1)
## Data: RATSL
## Models:
## RATS_ref1: Weight ~ Time + Group + (Time | ID)
## RATS_ref2: Weight ~ Time + Group + Time * Group + (Time | ID)
## Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq)
## RATS_ref1 8 1194.2 1219.6 -589.11 1178.2
## RATS_ref2 10 1185.9 1217.6 -582.93 1165.9 12.361 2 0.00207 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The model with the Time × Group interaction fits the data better, based on its higher log-likelihood and lower AIC; the difference is statistically significant.
Visualize the observed weights and the fitted values for weight from the last model:
ggplot(RATSL, aes(x = Time, y = Weight, group = ID)) +
geom_line(aes(linetype = Group)) +
scale_x_continuous(name = "Time (days)", breaks = seq(0, 60, 20)) +
scale_y_continuous(name = "Observed weight (grams)") +
theme(legend.position = "top")
Fitted <- fitted(RATS_ref2)
RATSL <- mutate(RATSL, Fitted=Fitted)
ggplot(RATSL, aes(x = Time, y = Fitted, group = ID)) +
geom_line(aes(linetype = Group)) +
scale_x_continuous(name = "Time (days)", breaks = seq(0, 60, 20)) +
scale_y_continuous(name = "Fitted weight (grams)") +
theme(legend.position = "top")